Early detection and isolation of COVID-19 patients are essential for successful implementation of mitigation strategies and, eventually, curbing the spread of the disease. With a limited number of daily COVID-19 tests performed in every country, simulating the spread of COVID-19 along with the potential effect of each mitigation strategy currently remains one of the most effective ways to manage the healthcare system and guide policy-makers. We introduce COVIDHunter, a flexible and accurate COVID-19 outbreak simulation model that evaluates the mitigation measures currently applied to a region, predicts COVID-19 statistics (the daily number of cases, hospitalizations, and deaths), and provides suggestions on what the strength of the upcoming mitigation measure should be. The key idea of COVIDHunter is to quantify the spread of COVID-19 by simulation while accounting for the effect of external factors, such as environmental conditions (e.g., climate, temperature, and humidity), different variants of concern, the vaccination rate, and mitigation measures. Using Switzerland as a case study, COVIDHunter estimates that we are experiencing a deadly new wave that will peak on 26 January 2022, which is very similar in numbers to the wave we had in February 2020. The policy-makers have only one choice: to increase the strength of the current mitigation measures for 30 days. Unlike existing models, the COVIDHunter model accurately monitors and predicts the daily number of cases, hospitalizations, and deaths due to COVID-19. Our model is flexible to configure and easy to modify for modeling different scenarios under different environmental conditions and mitigation measures. We release the source code of the COVIDHunter implementation at https://github.com/cmu-safari/covidhunter.
Background: Early detection and isolation of COVID-19 patients are essential for successful implementation of mitigation strategies and, eventually, curbing the spread of the disease. With a limited number of daily COVID-19 tests performed in every country, simulating the spread of COVID-19 along with the potential effect of each mitigation strategy currently remains one of the most effective ways to manage the healthcare system and guide policy-makers. Methods: We introduce COVIDHunter, a flexible and accurate COVID-19 outbreak simulation model that evaluates the mitigation measures currently applied to a region and provides suggestions on what the strength of the upcoming mitigation measure should be. The key idea of COVIDHunter is to quantify the spread of COVID-19 by simulation while accounting for the effect of external factors, such as environmental conditions (e.g., climate, temperature, and humidity) and mitigation measures. Results: Using Switzerland as a case study, COVIDHunter estimates that if policy-makers relax the mitigation measures by 50% for 30 days, then both the daily hospital bed capacity requirement and the daily number of deaths increase exponentially by an average of 5.1x, with patients who may occupy ICU beds and ventilators for a period of time. Unlike existing models, the COVIDHunter model accurately monitors and predicts the daily number of cases, hospitalizations, and deaths due to COVID-19. Our model is flexible to configure and easy to modify for modeling different scenarios under different environmental conditions and mitigation measures. Availability: We release the source code of the COVIDHunter implementation at https://github.com/cmu-safari/covidhunter and show how to flexibly configure our model for any scenario and easily extend it to different measures and conditions.
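A minimal sketch of the kind of day-by-day outbreak simulation the two abstracts above describe, assuming an illustrative model in which an environmental coefficient and a mitigation coefficient scale the effective reproduction number, and fixed case-to-hospitalization and case-to-death ratios; all names and parameter values are assumptions for illustration, not COVIDHunter's actual implementation.

```python
# Toy day-by-day outbreak simulation in the spirit of COVIDHunter.
# All coefficients below are illustrative assumptions, not the model's real values.

def simulate(days, population, base_r0, env_coeff, mitigation,
             infectious_period=7, hosp_rate=0.03, death_rate=0.01,
             initial_cases=100):
    """Return daily (cases, hospitalizations, deaths) lists."""
    susceptible = population - initial_cases
    active = [initial_cases / infectious_period] * infectious_period  # recent daily cases
    daily_cases, daily_hosp, daily_deaths = [], [], []

    for day in range(days):
        # Effective reproduction number scaled by environment and mitigation strength.
        r_eff = base_r0 * env_coeff[day] * (1.0 - mitigation[day])
        infectious = sum(active)
        new_cases = min(susceptible,
                        r_eff * infectious / infectious_period * susceptible / population)
        susceptible -= new_cases
        active = active[1:] + [new_cases]

        daily_cases.append(new_cases)
        daily_hosp.append(new_cases * hosp_rate)     # fixed case-to-hospitalization ratio
        daily_deaths.append(new_cases * death_rate)  # fixed case-to-death ratio
    return daily_cases, daily_hosp, daily_deaths


# Example scenario: 90 days, assumed winter seasonality, mitigation relaxed after day 30.
days = 90
env = [1.1 if d < 60 else 0.9 for d in range(days)]   # assumed environmental coefficient
mit = [0.6 if d < 30 else 0.3 for d in range(days)]   # assumed mitigation strengths
cases, hosp, deaths = simulate(days, 8_600_000, 2.7, env, mit)
print(f"peak daily cases: {max(cases):,.0f}")
```

Changing only the `mit` schedule (e.g., keeping 0.6 for all 90 days) models the "increase the strength of the current mitigation measures for 30 days" scenario against the relaxation scenario.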
Unmanned aerial vehicle (UAV) mobility enables flexible and customized federated learning (FL) at the network edge. However, the underlying uncertainties in the aerial-terrestrial wireless channel may lead to a biased FL model. In particular, the distribution of the global model and the aggregation of the local updates within the FL rounds at the UAVs are governed by the reliability of the wireless channel. This creates an undesirable bias towards the training data of ground devices with better channel conditions, and vice versa. This paper characterizes the global bias problem of aerial FL in large-scale UAV networks. To this end, the paper proposes a channel-aware distribution and aggregation scheme to enforce equal contribution from all devices in the FL training as a means to resolve the global bias problem. We demonstrate the convergence of the proposed method by experimenting with the MNIST dataset and show its superiority over existing methods. The obtained results enable system parameter tuning to relieve the impact of the aerial channel deficiency on the FL convergence rate.
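A minimal sketch of one way such channel-aware aggregation could be realized, assuming the bias is corrected by reweighting each received update by the inverse of its estimated delivery probability; the weighting rule and names are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def channel_aware_aggregate(updates, delivery_probs):
    """Aggregate local model updates received over unreliable aerial links.

    updates        : list of weight vectors that actually arrived this round
    delivery_probs : estimated probability that each sender's update gets through

    Reweighting by 1/p gives devices with poor channels the same expected
    contribution as devices with good channels (inverse-probability weighting).
    """
    weights = np.array([1.0 / max(p, 1e-3) for p in delivery_probs])
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, np.asarray(updates, dtype=float)))

# Example round: three devices, the third one behind a weak channel.
updates = [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([0.0, 1.0])]
probs = [0.9, 0.9, 0.3]
print(channel_aware_aggregate(updates, probs))
```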
The following article presents a memetic algorithm that applies deep reinforcement learning (DRL) for solving practically oriented dual resource constrained flexible job shop scheduling problems (DRC-FJSSP). In recent years, there has been extensive research on DRL techniques, but without considering realistic, flexible and human-centered shopfloors. A research gap can be identified in the context of make-to-order oriented discontinuous manufacturing as it is often represented in medium-size companies with high service levels. From practical industry projects in this domain, we recognize requirements to depict flexible machines, human workers and capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill of material (BOM) manufacturing, sequence-dependent setup times and (partially) automated tasks. On the other hand, intensive research has been done on metaheuristics in the context of DRC-FJSSP. However, there is a lack of suitable and generic scheduling methods that can be holistically applied in sociotechnical production and assembly processes. In this paper, we first formulate an extended DRC-FJSSP induced by the practical requirements mentioned above. Then we present our proposed hybrid framework with parallel computing for multicriteria optimization. Through numerical experiments with real-world data, we confirm that the framework generates feasible schedules efficiently and reliably. Utilizing DRL instead of random operations leads to better results and outperforms traditional approaches.
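A compact sketch of the generic memetic loop such a hybrid framework builds on, where the local-search operator applied to each offspring could be chosen by a learned (e.g., DRL) policy instead of at random; the `crossover`, `operators`, and `policy` interfaces are hypothetical placeholders under the assumption of a minimization objective (e.g., makespan), not the paper's actual algorithm.

```python
import random

def memetic_search(init_pop, fitness, crossover, policy, generations=100, elite=0.2):
    """Generic memetic loop: evolve a population and refine each offspring with a
    local-search operator selected by a policy (random or DRL-based).
    fitness is minimized (e.g., makespan of a candidate schedule)."""
    pop = sorted(init_pop, key=fitness)
    n_elite = max(1, int(elite * len(pop)))
    for _ in range(generations):
        offspring = []
        while len(offspring) < len(pop) - n_elite:
            p1, p2 = random.sample(pop[:max(2, len(pop) // 2)], 2)  # mating pool
            child = crossover(p1, p2)
            local_search = policy(child)        # DRL policy picks a local-search move
            offspring.append(local_search(child))  # memetic step: local refinement
        pop = sorted(pop[:n_elite] + offspring, key=fitness)  # elitist replacement
    return pop[0]
```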
A large number of empirical studies on applying self-attention models in the domain of recommender systems are based on offline evaluation and metrics computed on standardized datasets, without insights on how these models perform in real life scenarios. Moreover, many of them do not consider information such as item and customer metadata, although deep-learning recommenders live up to their full potential only when numerous features of heterogeneous types are included. Also, typically recommendation models are designed to serve well only a single use case, which increases modeling complexity and maintenance costs, and may lead to inconsistent customer experience. In this work, we present a reusable Attention-based Fashion Recommendation Algorithm (AFRA), that utilizes various interaction types with different fashion entities such as items (e.g., shirt), outfits and influencers, and their heterogeneous features. Moreover, we leverage temporal and contextual information to address both short and long-term customer preferences. We show its effectiveness on outfit recommendation use cases, in particular: 1) personalized ranked feed; 2) outfit recommendations by style; 3) similar item recommendation and 4) in-session recommendations inspired by most recent customer actions. We present both offline and online experimental results demonstrating substantial improvements in customer retention and engagement.
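A minimal sketch of the attention-based scoring idea, assuming the customer's recent heterogeneous interactions (items, outfits, influencers) are embedded and attended over to score a candidate entity; the layer sizes and fusion scheme are illustrative assumptions, not AFRA's actual architecture.

```python
import torch
import torch.nn as nn

class TinyAttnRecommender(nn.Module):
    """Scores a candidate entity against a sequence of heterogeneous interactions."""
    def __init__(self, n_entities, n_types, dim=64, heads=4):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.type_emb = nn.Embedding(n_types, dim)   # e.g., item / outfit / influencer
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, hist_ids, hist_types, cand_ids, cand_types):
        hist = self.entity_emb(hist_ids) + self.type_emb(hist_types)   # (B, T, D)
        cand = self.entity_emb(cand_ids) + self.type_emb(cand_types)   # (B, 1, D)
        ctx, _ = self.attn(cand, hist, hist)   # candidate attends over the history
        return self.score(ctx).squeeze(-1)     # (B, 1) relevance score

model = TinyAttnRecommender(n_entities=10_000, n_types=3)
hist_ids = torch.randint(0, 10_000, (2, 20))
hist_types = torch.randint(0, 3, (2, 20))
cand_ids = torch.randint(0, 10_000, (2, 1))
cand_types = torch.randint(0, 3, (2, 1))
print(model(hist_ids, hist_types, cand_ids, cand_types).shape)  # torch.Size([2, 1])
```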
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
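A minimal usage sketch with the Hugging Face transformers library; it loads the small released variant (bigscience/bloom-560m) rather than the full 176B checkpoint, which requires far more memory, and the prompt and generation settings are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The full model is published as "bigscience/bloom"; the 560M variant is
# loaded here only to keep the example runnable on a single machine.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("BLOOM is an open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```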
Amidst the rapid growth of fashion e-commerce, remote fitting of fashion articles remains a complex and challenging problem and a main driver of customer frustration. Despite recent advances in 3D virtual try-on solutions, such approaches are still limited to a very narrow selection of articles (if not only a handful), and often to a single size of those fashion items. Other state-of-the-art approaches that aim to support customers in finding what fits them online mostly require a high level of customer engagement and privacy-sensitive data (such as height, weight, age, gender, belly shape, etc.), or alternatively require images of customers' bodies in tight clothing. They also often lack the ability to produce fit- and shape-aware visual guidance at scale, merely advising which size to order that would best match a customer's body attributes, without providing any information on how the garment may fit and look. To take a leap beyond the limitations of current approaches, we propose FitGAN, a generative adversarial model that explicitly accounts for the entangled size and fit characteristics of garments in online fashion. Conditioned on the fit and shape of the articles, our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles. Through experiments on real-world data at scale, we demonstrate how our approach is capable of synthesizing visually realistic and diverse fits of fashion items, and explore its ability to control the fit and shape of thousands of online garment images.
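A minimal sketch of the conditioning idea: a generator that takes a latent item code together with an explicit fit/shape condition vector, so the fit can be varied while the item identity stays fixed; the architecture, dimensions, and one-hot conditions are illustrative assumptions, not FitGAN's actual network.

```python
import torch
import torch.nn as nn

class FitConditionedGenerator(nn.Module):
    """Maps (latent item code, fit/shape condition) to an image-like tensor."""
    def __init__(self, z_dim=64, cond_dim=8, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),
        )

    def forward(self, z, fit_cond):
        # Concatenating the condition lets the same item code z be rendered
        # under different fit/shape settings.
        return self.net(torch.cat([z, fit_cond], dim=-1)).view(-1, 3, 64, 64)

gen = FitConditionedGenerator()
z = torch.randn(4, 64)                        # fixed item identity codes
slim = torch.zeros(4, 8); slim[:, 0] = 1.0    # assumed one-hot fit condition
loose = torch.zeros(4, 8); loose[:, 1] = 1.0
imgs_slim, imgs_loose = gen(z, slim), gen(z, loose)
print(imgs_slim.shape)                        # torch.Size([4, 3, 64, 64])
```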
The use of machine learning (ML) in detecting network attacks has been effective when designed and evaluated within a single organization. However, designing an ML-based detection system that utilizes heterogeneous network data samples originating from several sources is very challenging, mainly due to privacy concerns and the lack of a universal format for datasets. In this paper, we propose a collaborative federated learning scheme to address these issues. The proposed framework allows multiple organizations to join forces in the design, training, and evaluation of a robust ML-based network intrusion detection system. The threat intelligence scheme relies on two key aspects: first, the availability of network traffic data in a common format to allow the extraction of meaningful patterns across data sources; second, the adoption of a federated learning mechanism to avoid the need to share sensitive user information between organizations. As a result, each organization benefits from the cyber threat intelligence of other organizations while keeping its data private internally. The model is trained locally, and only the updated weights are shared with the remaining participants through a federated averaging process. The framework is designed and evaluated in this paper using two key datasets in NetFlow format, known as NF-UNSW-NB15-v2 and NF-BoT-IoT-v2. Two other common scenarios are also considered in the evaluation: a centralized training approach in which local data samples are shared with other organizations, and a localized training approach in which no threat intelligence is shared. The results demonstrate the efficiency and effectiveness of the proposed framework by designing a universal ML model that effectively classifies benign and intrusive traffic originating from multiple organizations without the need for local data exchange.
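A minimal sketch of the federated averaging step such a scheme relies on: each organization trains locally on its private NetFlow records and shares only model weights, which a coordinator averages (here weighted by local sample counts); the `Org` interface and the weighting are generic FedAvg assumptions, not the paper's exact protocol.

```python
import numpy as np

class Org:
    """Stand-in for one organization's private NetFlow data (hypothetical interface)."""
    def __init__(self, n_flows, drift):
        self.n_flows, self.drift = n_flows, drift
    def train_locally(self, global_w):
        # Placeholder for local training on private flows; only weights leave the org.
        return global_w + self.drift
    def num_flows(self):
        return self.n_flows

def fedavg_round(orgs, global_w):
    """Broadcast the global model, train locally, and average by sample count."""
    locals_, counts = zip(*[(o.train_locally(global_w), o.num_flows()) for o in orgs])
    total = sum(counts)
    return sum((n / total) * w for w, n in zip(locals_, counts))

orgs = [Org(700_000, np.array([0.1, -0.1])), Org(300_000, np.array([-0.2, 0.3]))]
w = np.zeros(2)
for _ in range(3):          # three federated rounds, no raw flows exchanged
    w = fedavg_round(orgs, w)
print(w)
```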
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets are different in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). The accuracy of three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), is evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no clear FE method or ML model can achieve the best scores for all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
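A minimal scikit-learn sketch of the kind of FE + ML pairing the study evaluates, here PCA feeding a logistic-regression classifier; the synthetic data and the swept component counts are placeholders, since the real datasets (UNSW-NB15, ToN-IoT, CSE-CIC-IDS2018) differ in feature sets and the paper's finding is that the optimal dimensionality is dataset-specific.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a NIDS dataset (binary: benign vs. attack traffic).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Sweep the number of extracted dimensions, since the best value is dataset-specific.
for n_components in (5, 10, 20):
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=n_components)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    pipe.fit(X_train, y_train)
    print(n_components, round(pipe.score(X_test, y_test), 3))
```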